Results 1 - 20 of 118
1.
IEEE Transactions on Radiation and Plasma Medical Sciences ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20244069

ABSTRACT

Automatic lung infection segmentation in computed tomography (CT) scans can offer great assistance in radiological diagnosis by improving accuracy and reducing the time required for diagnosis. The biggest challenges for deep learning (DL) models in segmenting infection regions are the high variance in infection characteristics, the fuzzy boundaries between infected and normal tissues, and the difficulty of obtaining large amounts of annotated data for training. To resolve these issues, we propose a Modified U-Net (Mod-UNet) model with minor architectural changes and significant modifications to the training process of the vanilla 2D UNet. As part of these modifications, we updated the loss function, optimization function, and regularization methods, added a learning rate scheduler, and applied advanced data augmentation techniques. Segmentation results on two COVID-19 lung CT segmentation datasets show that the performance of Mod-UNet is considerably better than that of the baseline U-Net. Furthermore, to mitigate the lack of annotated data, Mod-UNet is used in a semi-supervised framework (Semi-Mod-UNet) that uses a random sampling approach to progressively enlarge the training dataset from a large pool of unannotated CT slices. Exhaustive experiments on the two COVID-19 CT segmentation datasets and on a real lung CT volume show that Mod-UNet and Semi-Mod-UNet significantly outperform other state-of-the-art approaches in automated lung infection segmentation. © IEEE.
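Editor's note: as an illustration of the semi-supervised idea described above (not taken from the cited paper), the sketch below progressively enlarges a labeled pool by pseudo-labeling randomly sampled unannotated slices. The callables `train_model` and `predict_mask` are hypothetical placeholders for the segmentation training and inference routines.

```python
# Minimal sketch: random-sampling pseudo-labeling loop (assumed workflow, not the paper's code).
import random

def semi_supervised_loop(labeled, unlabeled, train_model, predict_mask,
                         rounds=5, sample_size=100):
    """Each round: train on the current labeled pool, pseudo-label a random
    sample of unannotated slices, and move them into the labeled pool."""
    labeled = list(labeled)
    pool = list(range(len(unlabeled)))            # indices of still-unlabeled slices
    model = None
    for _ in range(rounds):
        model = train_model(labeled)              # supervised training on current pool
        picked = random.sample(pool, min(sample_size, len(pool)))
        for i in picked:                          # add pseudo-labeled slices
            labeled.append((unlabeled[i], predict_mask(model, unlabeled[i])))
        pool = [i for i in pool if i not in picked]
        if not pool:
            break
    return model
```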

2.
2023 9th International Conference on Advanced Computing and Communication Systems, ICACCS 2023 ; : 777-782, 2023.
Article in English | Scopus | ID: covidwho-20241024

ABSTRACT

Over the past few years, millions of people around the world have developed thoracic ailments. MRI, CT scans, reverse transcription tests, and other methods are among those used to detect thoracic disorders. These procedures demand medical expertise and are exceedingly expensive and delicate. An alternative and more widely used method to diagnose diseases of the chest is X-ray imaging. The goal of this study was to increase detection precision in order to develop a computationally assisted diagnostic tool. Different diseases can be identified by combining radiological imaging with various artificial intelligence approaches. In this study, transfer learning (TL) and capsule neural network techniques are used to propose a method for the automatic detection of various thoracic illnesses from digitized chest X-ray images of suspected patients. Four public databases were combined to build a dataset for this purpose. Three pre-trained convolutional neural networks (CNNs) were utilized in TL, with augmentation as a preprocessing technique, to train and evaluate the model. The network was trained to classify images into four classes: pneumonia, COVID-19, normal, and tuberculosis (TB). © 2023 IEEE.
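Editor's note: a minimal transfer-learning sketch in the spirit of the abstract, not from the cited paper. The ResNet-50 backbone, augmentation choices, and input size are illustrative assumptions; the abstract does not name the three pre-trained CNNs.

```python
# Sketch: re-heading an ImageNet-pretrained CNN for a 4-class chest X-ray task
# (pneumonia / COVID-19 / normal / tuberculosis), with simple augmentation.
import torch.nn as nn
from torchvision import transforms
from torchvision.models import resnet50, ResNet50_Weights

augment = transforms.Compose([                 # preprocessing + augmentation (assumed)
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = resnet50(weights=ResNet50_Weights.DEFAULT)  # pretrained backbone (assumption)
for p in model.parameters():                        # freeze backbone features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)       # new trainable 4-class head
```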

3.
Proceedings - 2022 2nd International Symposium on Artificial Intelligence and its Application on Media, ISAIAM 2022 ; : 135-139, 2022.
Article in English | Scopus | ID: covidwho-20236902

ABSTRACT

Deep learning (DL) approaches for image segmentation have been achieving state-of-the-art performance in recent years. In particular, the U-Net model has been used successfully in the field of image segmentation. However, traditional U-Net methods extract features, aggregate remote information, and reconstruct images by stacking convolution, pooling, and upsampling blocks. This traditional approach is very inefficient because of the stacked local operators. In this paper, we propose the multi-attentional U-Net, which is equipped with non-local-block-based self-attention, channel attention, and spatial attention for image segmentation. These blocks can be inserted into U-Net to flexibly aggregate information on the planar and spatial scales. We evaluate the multi-attentional U-Net model on three benchmark datasets: COVID-19 segmentation, skin cancer segmentation, and thyroid nodule segmentation. Results show that our proposed models achieve better performance with faster computation and fewer parameters. The multi-attentional U-Net can improve medical image segmentation results. © 2022 IEEE.
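Editor's note: a generic sketch of channel- and spatial-attention blocks of the kind that can be inserted into a U-Net; this is a common formulation and not the cited paper's exact design.

```python
# Sketch: minimal channel- and spatial-attention blocks (assumed design).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                     # global average pool -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                               # re-weight channels

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                               # re-weight spatial positions

feat = torch.randn(2, 64, 32, 32)
feat = SpatialAttention()(ChannelAttention(64)(feat))  # attended feature map
```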

4.
CMC-Computers Materials & Continua ; 75(3):5717-5742, 2023.
Article in English | Web of Science | ID: covidwho-20232208

ABSTRACT

Coronavirus has infected more than 753 million people, with severity varying from one person to another, and more than six million infected people have died worldwide. Computer-aided diagnosis (CAD) with artificial intelligence (AI) has shown outstanding performance in effectively diagnosing this virus in real time. Computed tomography is a complementary diagnostic tool for clarifying the damage of COVID-19 in the lungs even before symptoms appear in patients. This paper conducts a systematic literature review of deep learning methods for segmenting COVID-19 infection in the lungs. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow method. This research aims to systematically analyze the supervised deep learning methods, open-resource datasets, data augmentation methods, and loss functions used for the various segment shapes of COVID-19 infection in computed tomography (CT) chest images. We selected 56 primary studies relevant to the topic of the paper and compared different aspects of the algorithms used to segment infected areas in CT images. Deep learning methods for segmenting infected areas still have limitations, particularly in predicting smaller regions of infection at the beginning of their appearance.

5.
Diagnostics (Basel) ; 13(10)2023 May 18.
Article in English | MEDLINE | ID: covidwho-20237170

ABSTRACT

Digital healthcare systems demand the early diagnosis of infectious diseases. Currently, detection of the new coronavirus disease (COVID-19) is a major clinical requirement. Deep learning models have been used for COVID-19 detection in various studies, but their robustness is still limited. In recent years, deep learning models have grown in popularity in almost every area, particularly in medical image processing and analysis. Visualization of the internal structure of the human body is critical in medical analysis, and many imaging techniques are in use to perform this job. A computed tomography (CT) scan is one of them, and it has been widely used for non-invasive observation of the human body. The development of an automatic segmentation method for lung CT scans showing COVID-19 can save experts time and reduce human error. In this article, CRV-NET is proposed for the robust detection of COVID-19 in lung CT scan images. A public dataset (the SARS-CoV-2 CT Scan dataset) is used for the experimental work and customized according to the scenario of the proposed model. The proposed modified deep-learning-based U-Net model is trained on a custom dataset with 221 training images and their ground truth, which was labeled by an expert. The proposed model is tested on 100 test images, and the results show that the model segments COVID-19 with a satisfactory level of accuracy. Moreover, comparison of the proposed CRV-NET with different state-of-the-art convolutional neural network (CNN) models, including the U-Net model, shows better results in terms of accuracy (96.67%) and robustness (low epoch value for detection and the smallest training data size).

6.
Diagnostics (Basel) ; 13(11)2023 Jun 02.
Article in English | MEDLINE | ID: covidwho-20235054

ABSTRACT

BACKGROUND AND MOTIVATION: Lung computed tomography (CT) techniques are high-resolution and well adopted in the intensive care unit (ICU) for COVID-19 disease control classification. Most artificial intelligence (AI) systems do not undergo generalization and are typically overfitted. Such trained AI systems are not practical for clinical settings and therefore do not give accurate results when executed on unseen data sets. We hypothesize that ensemble deep learning (EDL) is superior to deep transfer learning (TL) in both non-augmented and augmented frameworks. METHODOLOGY: The system consists of a cascade of quality control, ResNet-UNet-based hybrid deep learning for lung segmentation, and seven models using TL-based classification followed by five types of EDL. To prove our hypothesis, five different data combinations (DC) were designed using two multicenter cohorts, Croatia (80 COVID) and Italy (72 COVID and 30 controls), leading to 12,000 CT slices. As part of generalization, the system was tested on unseen data and statistically tested for reliability/stability. RESULTS: Using the K5 (80:20) cross-validation protocol on the balanced and augmented dataset, the five DC datasets improved TL mean accuracy by 3.32%, 6.56%, 12.96%, 47.1%, and 2.78%, respectively. The five EDL systems showed improvements in accuracy of 2.12%, 5.78%, 6.72%, 32.05%, and 2.40%, thus validating our hypothesis. All statistical tests proved positive for reliability and stability. CONCLUSION: EDL showed superior performance to TL systems for both (a) unbalanced and unaugmented and (b) balanced and augmented datasets, for both (i) seen and (ii) unseen paradigms, validating our hypotheses.
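Editor's note: a minimal sketch of one common form of ensemble deep learning, soft voting over the member classifiers' softmax outputs; the cited study's five EDL variants are not specified in the abstract, so this is an assumption for illustration only.

```python
# Sketch: soft-voting ensemble over several trained transfer-learning classifiers.
import torch

def ensemble_predict(models, x):
    """Average class probabilities over the member models and pick the argmax."""
    probs = [torch.softmax(m(x), dim=1) for m in models]   # per-model class probabilities
    return torch.stack(probs).mean(dim=0).argmax(dim=1)    # soft vote -> class index
```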

7.
International Journal of Intelligent Engineering and Systems ; 16(3):565-578, 2023.
Article in English | Scopus | ID: covidwho-2323766

ABSTRACT

Coronavirus disease 2019 (COVID-19), the disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has been spreading since 2019. Chest CT scan images have contributed significantly to the prognosis, diagnosis, and detection of complications in COVID-19. Automatic segmentation of COVID-19 infections involving ground-glass opacities and consolidation can assist radiologists in COVID-19 screening, which helps reduce the time spent analyzing the infection. In this study, we propose a novel deep learning network to segment lung damage caused by COVID-19 by utilizing EfficientNet and ResNet as the encoder and a modified U-Net with Swish activation, named SwishUnet, as the decoder. In particular, SwishUnet allows the model to benefit from smoothness, non-monotonicity, and one-sided boundedness at zero. Three experiments were conducted to evaluate the performance of the proposed architecture on the 100 CT scans and 9 volumetric CT scans from the Italian Society of Medical and Interventional Radiology. The results of the first experiment showed that the best sensitivity was 82.7%, obtained using the ResNet+SwishUnet method with the Tversky loss function. In the second experiment, the architecture with the basic U-Net only reached a sensitivity of 67.2%, but with our proposed EfficientNet+SwishUnet we improved this to 88.1%. For the third experiment, the best sensitivity was 79.8%, obtained with ResNet+SwishUnet. All models with SwishUnet have the same specificity of 99.8%. From these experiments, we conclude that our proposed method with SwishUnet has better performance than the previous method. © 2023, International Journal of Intelligent Engineering and Systems. All Rights Reserved.
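Editor's note: a minimal sketch of the two ingredients named in the abstract, the Swish activation and a Tversky loss; the weighting convention shown (alpha on false negatives, beta on false positives) is one common choice, not necessarily the paper's.

```python
# Sketch: Swish activation and a Tversky loss for binary segmentation (assumed form).
import torch

def swish(x):
    return x * torch.sigmoid(x)            # smooth, non-monotonic, bounded below at zero

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """pred: predicted probabilities in [0, 1]; target: binary mask (same shape)."""
    pred, target = pred.flatten(), target.flatten()
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    return 1 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)
```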

8.
1st International Conference on Recent Trends in Microelectronics, Automation, Computing and Communications Systems, ICMACC 2022 ; : 167-173, 2022.
Article in English | Scopus | ID: covidwho-2325759

ABSTRACT

Lung segmentation is a process for the detection and identification of lung cancer and pneumonia with the help of image processing techniques. Deep learning algorithms can be incorporated to build computer-aided diagnosis (CAD) systems for detecting or recognizing broad conditions such as acute respiratory distress syndrome (ARDS), tuberculosis, pneumonia, lung cancer, COVID, and several other respiratory diseases. This paper presents pneumonia detection from lung segmentation using deep learning methods on chest radiographs. The chest X-ray is the most useful among existing techniques because of its lower cost. The main drawback of a chest X-ray is that it cannot detect all problems in the chest. Thus, convolutional neural networks (CNNs) are implemented to perform lung segmentation and obtain correct results. The 'lost' regions of the lungs are reconstructed by an automatic segmentation method from raw chest X-ray images. © 2022 IEEE.

9.
2022 International Conference of Advanced Technology in Electronic and Electrical Engineering, ICATEEE 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2316058

ABSTRACT

COVID-19, the new coronavirus, is a threat to global public health. Today, there is an urgent need for automatic COVID-19 infection detection tools. This work proposes an automatic COVID-19 infection detection system based on CT image segmentation. A deep learning network developed from an improved residual U-Net architecture extracts infected areas from CT lung images. We tested the system on public COVID-19 CT images. An evaluation using the F1 score, sensitivity, specificity, and accuracy demonstrated the effectiveness of the proposed network. Furthermore, experimental results showed that the proposed network performed well in extracting infection regions, so it can assist experts in COVID-19 infection detection. © 2022 IEEE.
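Editor's note: a small sketch of how the four evaluation measures named above can be computed from pixel-wise confusion counts of a predicted binary mask against ground truth; this is standard practice, not code from the cited paper.

```python
# Sketch: F1, sensitivity, specificity, and accuracy from a binary mask comparison.
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-9):
    """pred, gt: binary NumPy arrays of identical shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    sensitivity = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * sensitivity / (precision + sensitivity + eps)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"F1": f1, "sensitivity": sensitivity,
            "specificity": specificity, "accuracy": accuracy}
```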

10.
2022 International Conference of Advanced Technology in Electronic and Electrical Engineering, ICATEEE 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2312477

ABSTRACT

The coronavirus disease has severely affected medical healthcare systems worldwide. Physicians use radiological examinations as a primary clinical tool for diagnosing patients with suspected COVID-19 infection. Recently, deep learning approaches have further enhanced medical image processing and analysis, reduced the workload of radiologists, and improved the performance of radiology systems. This paper addresses medical image segmentation; we present a comparative performance study of four neural network (NN) models, U-Net, 3D-UNet, KiU-Net, and SegNet, for aiding diagnosis. Additionally, we present a 3D reconstruction of COVID-19 lesions and lungs and an augmented reality (AR) platform, including AR visualization and interaction. Quantitative and qualitative assessments are provided for both contributions. The NN models performed well in the AI-based COVID-19 diagnostic process. The AR-COVID-19 platform can be viewed as an ancillary diagnostic tool for medical practice, serving to support radiologists' visualization and reading. © 2022 IEEE.

11.
Diagnostics (Basel) ; 13(9)2023 May 08.
Article in English | MEDLINE | ID: covidwho-2312446

ABSTRACT

The disaster of the COVID-19 pandemic has claimed numerous lives and wreaked havoc on the entire world due to its transmissible nature. One of the complications of COVID-19 is pneumonia. Different radiography methods, particularly computed tomography (CT), have shown outstanding performance in effectively diagnosing pneumonia. In this paper, we propose a spatial attention and attention gate UNet model (SAA-UNet) inspired by spatial attention UNet (SA-UNet) and attention UNet (Att-UNet) to deal with the problem of infection segmentation in the lungs. The proposed method was applied to the MedSeg, Radiopaedia 9P, combined MedSeg and Radiopaedia 9P, and Zenodo 20P datasets. The proposed method showed good infection segmentation results (two classes: infection and background), with average Dice similarity coefficients of 0.85, 0.94, 0.91, and 0.93 and mean intersection over union (IoU) values of 0.78, 0.90, 0.86, and 0.87, respectively, on the four datasets mentioned above. Moreover, it also performed well in multi-class segmentation, with average Dice similarity coefficients of 0.693, 0.89, 0.87, and 0.93 and IoU scores of 0.68, 0.87, 0.78, and 0.89 on the four datasets, respectively. Classification accuracies of more than 97% were achieved for all four datasets. The F1 scores for the MedSeg, Radiopaedia 9P, combined MedSeg and Radiopaedia 9P, and Zenodo 20P datasets were 0.865, 0.943, 0.917, and 0.926, respectively, for binary classification. For multi-class classification, accuracies of more than 96% were achieved on all four datasets. The experimental results showed that the proposed framework can effectively and efficiently segment COVID-19 infection on CT images with different contrast and can be used to aid in diagnosing and treating pneumonia caused by COVID-19.
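Editor's note: the Dice similarity coefficient and IoU reported above are standard overlap measures; the short sketch below shows their usual computation for a binary mask and is not taken from the cited paper.

```python
# Sketch: Dice similarity coefficient and intersection over union (IoU).
import numpy as np

def dice_and_iou(pred, gt, eps=1e-9):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)   # 2|A∩B| / (|A| + |B|)
    iou = inter / (union + eps)                        # |A∩B| / |A∪B|
    return dice, iou
```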

12.
Signal Image Video Process ; : 1-9, 2022 Jul 25.
Article in English | MEDLINE | ID: covidwho-2318423

ABSTRACT

Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular the most recent self-attention methods, have been shown to help gather contextual information within deep networks and to benefit semantic segmentation tasks. The recent attention-augmented convolution model aims to capture long-range interactions by concatenating self-attention and convolution feature maps. This work proposes a novel attention-augmented convolution U-Net (AA-U-Net) that enables more accurate spatial aggregation of contextual information by integrating attention-augmented convolution in the bottleneck of an encoder-decoder segmentation architecture. A deep segmentation network (U-Net) with this attention mechanism significantly improves the performance of semantic segmentation on the challenging COVID-19 lesion segmentation task. The validation experiments show that the performance gain of the attention-augmented U-Net comes from its ability to capture dynamic and precise (wider) attention context. The AA-U-Net achieves Dice scores of 72.3% and 61.4% for ground-glass opacity and consolidation lesions in COVID-19 segmentation and improves accuracy by 4.2 percentage points against a baseline U-Net and by 3.09 percentage points compared to a baseline U-Net with matched parameters. Supplementary Information: The online version contains supplementary material available at 10.1007/s11760-022-02302-3.
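Editor's note: a simplified sketch of the idea of an attention-augmented bottleneck, concatenating a convolution branch with a self-attention branch over flattened spatial positions. The block below is a generic approximation, not the AA-U-Net implementation; channel split, head count, and the use of `nn.MultiheadAttention` are assumptions.

```python
# Sketch: bottleneck block concatenating convolution and self-attention feature maps.
import torch
import torch.nn as nn

class AABottleneck(nn.Module):
    def __init__(self, channels, attn_channels=32, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels - attn_channels, 3, padding=1)
        self.proj = nn.Conv2d(channels, attn_channels, 1)
        self.attn = nn.MultiheadAttention(attn_channels, heads, batch_first=True)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        conv_out = self.conv(x)                            # local convolution branch
        q = self.proj(x).flatten(2).transpose(1, 2)        # (B, H*W, attn_channels)
        attn_out, _ = self.attn(q, q, q)                   # global self-attention branch
        attn_out = attn_out.transpose(1, 2).reshape(b, -1, h, w)
        return torch.cat([conv_out, attn_out], dim=1)      # same channel count as input

y = AABottleneck(64)(torch.randn(1, 64, 16, 16))           # y: (1, 64, 16, 16)
```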

13.
Istanbul Medical Journal ; 24(1):40-47, 2023.
Article in English | Web of Science | ID: covidwho-2311726

ABSTRACT

Introduction: This study aimed to construct an artificial intelligence system to detect Coronavirus disease-2019 (COVID-19) pneumonia on computed tomography (CT) images and to test its diagnostic performance. Methods: Data were acquired between March 18 and April 17, 2020. CT data of 269 reverse transcriptase-polymerase chain reaction proven patients were extracted, and 173 studies (122 for training, 51 for testing) were finally used. The most typical lesions of COVID-19 pneumonia were labeled by two radiologists using a custom tool to generate multiplanar ground-truth masks. Using a patch size of 128x128 pixels, 18,255 axial, 71,458 coronal, and 72,721 sagittal patches were generated to train the datasets with the U-Net network. Lesions were extracted in the orthogonal planes and filtered by lung segmentation. Sagittal and coronal predicted masks were reconverted to the axial plane and merged into the intersected axial mask using a voting scheme. Results: Based on the axial predicted masks, the sensitivity and specificity of the model were found to be 91.4% and 99.9%, respectively. The total number of positive predictions increased by 3.9% with the use of the intersected predicted masks, whereas the total number of negative predictions only slightly decreased, by 0.01%. These changes resulted in 91.5% sensitivity, 99.9% specificity, and 99.9% accuracy. Conclusion: This study has shown the reliability of the U-Net architecture in diagnosing typical pulmonary lesions of COVID-19 in CT images. It also showed a slightly favorable effect of the intersection method on the model's performance. Based on the performance level presented, the model may be used for the rapid and accurate detection and characterization of typical COVID-19 pneumonia to assist radiologists.
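Editor's note: a minimal sketch of merging plane-wise predictions by voting, in the spirit of the intersection step described above. The exact voting rule used by the authors is not given in the abstract, so the two-of-three majority below is an assumption.

```python
# Sketch: merging axial, coronal, and sagittal predicted masks by a simple vote.
import numpy as np

def merge_by_voting(axial, coronal, sagittal, min_votes=2):
    """Inputs: binary mask volumes of identical shape, already resampled to the
    axial orientation. A voxel is positive if at least `min_votes` of the three
    plane-wise predictions agree."""
    votes = axial.astype(int) + coronal.astype(int) + sagittal.astype(int)
    return votes >= min_votes
```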

14.
IEEE Access ; 11:595-645, 2023.
Article in English | Web of Science | ID: covidwho-2311192

ABSTRACT

Biomedical image segmentation (BIS) is challenging due to variations in organ type, position, shape, size, scale, orientation, and image contrast. Conventional methods lack accurate and automated designs. Artificial intelligence (AI)-based UNet has recently dominated BIS. This is the first review of its kind that microscopically addresses UNet types by complexity, stratifies UNet by its components, addresses UNet in a vascular vs. non-vascular framework, relates the key segmentation challenges to UNet-based architectures, and finally interfaces three facets of AI: pruning, explainable AI (XAI), and AI bias. PRISMA was used to select 267 UNet-based studies. Five classes were identified and labeled as conventional UNet, superior UNet, attention-channel UNet, hybrid UNet, and ensemble UNet. We discovered 81 variations of UNet by considering six kinds of components, namely the encoder, decoder, skip connection, bridge network, loss function, and their combination. Vascular vs. non-vascular UNet architectures were compared. AP(ai)Bias 2.0-UNet was identified in these UNet classes based on (i) the attributes of the UNet architecture and its performance, (ii) explainable AI (XAI), and (iii) pruning (compression). Five bias methods, (i) ranking, (ii) radial, (iii) regional area, (iv) PROBAST, and (v) ROBINS-I, were applied and compared using a Venn diagram. Vascular and non-vascular UNet systems were dominated by sUNet classes with attention. Most of the studies showed little interest in XAI and pruning strategies. None of the UNet models qualified as bias-free. There is a need to move from a paper-to-practice paradigm to clinical evaluation and settings.

15.
11th EAI International Conference on Context-Aware Systems and Applications, ICCASA 2022 ; 475 LNICST:102-111, 2023.
Article in English | Scopus | ID: covidwho-2292310

ABSTRACT

Today, the medical industry is promoting the research and application of artificial intelligence in disease diagnosis and treatment. The development of diagnostic methods supported by electronic devices and information technology can help doctors save time in diagnosing and treating diseases, especially with medical images. The diagnosis of lung lesions from lung images is a case study. This paper proposes a method for lung lesion image classification based on a modified U-Net and VGG-19 combined with the AdaBoost technique. The modified U-Net architecture has 5 pooling and 5 unpooling layers; its unpooling layers use kernels of size 2 × 2 with stride 2 × 2 to produce output consistent with the AdaBoost stage. The proposed method achieves a result of about 97.61%, better than other methods on the COVID-19 Radiography dataset. © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
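Editor's note: one plausible way to combine two deep networks with AdaBoost, sketched below for illustration; the `unet_features` and `vgg19_features` extractors are hypothetical placeholders, and the cited paper's exact combination scheme is not described in the abstract.

```python
# Sketch: boosting a classifier over features produced by two deep networks (assumed setup).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_adaboost(images, labels, unet_features, vgg19_features):
    """images: array of inputs; labels: class labels; the two feature functions
    return (n_samples, n_features) arrays from the respective networks."""
    feats = np.hstack([unet_features(images), vgg19_features(images)])
    clf = AdaBoostClassifier(n_estimators=100)   # boosted ensemble over combined features
    clf.fit(feats, labels)
    return clf
```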

16.
IEEE Sensors Journal ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2291171

ABSTRACT

Although medical imaging technology has persisted in evolving over the last decades, the techniques and technologies used for analysis and visualisation have remained largely constant. Manual or semi-automatic segmentation is, in many cases, complicated: it requires the intervention of a specialist and is time-consuming, especially during the Coronavirus disease (COVID-19) pandemic, which has had devastating medical and economic consequences. Processing and visualising medical images with advanced techniques represent breakthroughs for medical professionals. This paper studies how augmented reality (AR) and artificial intelligence (AI) can transform medical practice during the COVID-19 and post-COVID-19 pandemic. Here we report an augmented reality visualisation and interaction platform; it covers the whole process from uploading chest CT scan images to automatic deep-learning-based segmentation, 3D reconstruction, 3D visualisation, and manipulation. AR provides a more realistic 3D visualisation system, allowing doctors to effectively interact with the generated 3D model of the segmented lungs and COVID-19 lesions. We use the U-Net neural network (NN) for automated segmentation. The statistical measures obtained for the Dice score, pixel accuracy, sensitivity, G-mean, and specificity are 0.749, 0.949, 0.956, 0.955, and 0.954, respectively. User-friendliness and usability are objectified by a formal user study that compared our augmented-reality-assisted design to the standard diagnosis setup. One hundred and six doctors and medical students, including eight senior medical lecturers, volunteered to assess our platform. The platform could be used as a diagnostic aid to identify and analyse COVID-19 infection or as a training tool for residents and medical students. The prototype can be extended to other pulmonary pathologies. © IEEE.

17.
37th International Conference on Information Networking, ICOIN 2023 ; 2023-January:483-486, 2023.
Article in English | Scopus | ID: covidwho-2274087

ABSTRACT

Data collection and sharing have been widely accepted and adopted to improve the performance of deep learning models in almost every field. Nevertheless, in the medical field, sharing patient data can raise several critical issues, such as privacy and security or even legal issues. Synthetic medical images have been proposed to overcome such challenges; these synthetic images are generated by learning the distribution of realistic medical images while being completely different from them, so that they can be shared and used across different medical institutions. Currently, the diffusion model (DM) has gained a lot of attention due to its potential to generate realistic and high-resolution images, particularly outperforming generative adversarial networks (GANs) in many applications. The DM defines the state of the art for various computer vision tasks such as image inpainting, class-conditional image synthesis, and others. However, the diffusion model is time- and power-consuming due to its large size. Therefore, this paper proposes a lightweight DM to synthesize medical images; we use computed tomography (CT) scans for SARS-CoV-2 (COVID-19) as the training dataset. We then conduct extensive simulations to show the performance of the proposed diffusion model in medical image generation and explain the key components of the model. © 2023 IEEE.
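Editor's note: a minimal sketch of the standard forward (noising) step used when training diffusion models, q(x_t | x_0) = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε; the linear beta schedule shown is a common default and is not claimed to match the cited lightweight DM.

```python
# Sketch: forward diffusion (noising) step with a linear beta schedule.
import torch

def q_sample(x0, t, alphas_cumprod):
    """x0: (B, C, H, W) images; t: (B,) integer timesteps;
    alphas_cumprod: (T,) cumulative product of (1 - beta)."""
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise, noise

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # linear noise schedule (assumption)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)
x_noisy, eps = q_sample(torch.randn(2, 1, 64, 64), torch.tensor([10, 500]), alphas_cumprod)
```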

18.
26th International Computer Science and Engineering Conference, ICSEC 2022 ; : 263-268, 2022.
Article in English | Scopus | ID: covidwho-2268496

ABSTRACT

Human-face-related digital technologies have been widely applied in various fields, including face-recognition-based biometrics, facial-landmark-based face deformation for gaming, and, in the medical field, facial reconstruction for those disfigured by an accident, among others. Such technologies typically rely on the information of a full, uncovered face, and their performance suffers varying degrees of deterioration according to the level of facial occlusion. 2D face recovery from occluded faces has therefore become an important research area, as it is both crucial and desirable to attain full facial information before it is used in downstream tasks. In this paper, we address the problem of 2D face recovery from facial-mask occlusions, a pertinent issue widely observed in situations such as the COVID-19 pandemic. In recent work, most research recovers masked faces through deep learning techniques. The whole process consists of two tasks: image segmentation and image inpainting. Since U-Net is a typical deep learning model for image segmentation and is also helpful in image inpainting and image colorization, it has been frequently used in solving face recovery problems. To further explore the capability of U-Net and its variants for face recovery from masked faces, we conduct a comparative study of several U-Net-based models on a synthetic dataset generated from public face datasets and a mask generator. Results showed that ResNet U-Net and VGG16 U-Net performed better in face recovery among the six U-Net-based models. © 2022 IEEE.

19.
IET Image Processing ; 2023.
Article in English | Scopus | ID: covidwho-2262151

ABSTRACT

To solve the problems of missing edges and low segmentation accuracy in medical image segmentation, a medical image segmentation network (EAGC_UNet++) based on residual graph convolution UNet++ with an edge attention gate (EAG) is proposed in this study. With UNet++ as the backbone network, the idea of graph theory is introduced into the model. First, the dropout residual graph convolution block (DropRes_GCN Block) and the traditional convolution structure in UNet++ are used as encoders. Second, EAGs are adopted so that the model pays more attention to image edge features during decoding. Finally, to address the imbalance between positive and negative samples in medical image segmentation, a new weighted loss function is introduced to enhance segmentation accuracy. In the experimental part, three datasets (LiTS2017, ISIC2018, COVID-19 CT scans) were used to evaluate the performance of various models, and multiple groups of ablation experiments were designed to verify the effectiveness of each part of the model. The experimental results showed that EAGC_UNet++ had better segmentation performance than the other models under three quantitative evaluation indicators and better solved the problem of missing edges in medical image segmentation. © 2023 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
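Editor's note: a small sketch of one common way to weight a loss against positive/negative pixel imbalance, up-weighting the rare lesion pixels in binary cross-entropy; the cited paper's weighted loss is not specified in the abstract, so this is illustrative only.

```python
# Sketch: weighted binary cross-entropy for imbalanced segmentation (assumed form).
import torch
import torch.nn.functional as F

def weighted_bce(logits, target, pos_weight=10.0):
    """logits, target: (B, 1, H, W); target is a float {0, 1} mask.
    pos_weight > 1 emphasizes the rare positive (lesion) pixels."""
    w = torch.tensor([pos_weight], device=logits.device)
    return F.binary_cross_entropy_with_logits(logits, target, pos_weight=w)
```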

20.
2022 International Conference on Data Analytics for Business and Industry, ICDABI 2022 ; : 28-32, 2022.
Article in English | Scopus | ID: covidwho-2251046

ABSTRACT

The COVID-19 disease, which emerged in China in December 2019 and is caused by a coronavirus, soon became a pandemic all over the world. The fact that the reverse transcription polymerase chain reaction (RT-PCR) test produces false negatives in some studies and that the diagnosis time is long has led to the search for new alternatives for diagnosing this virus, which can result in death, especially through the damage it causes to the lungs. Therefore, chest images obtained from computed tomography (CT) or CXR imaging techniques have become suitable tools for diagnosis. Deep learning studies have been proposed to provide diagnosis with these tools and to determine the infected region in COVID-19 and pneumonia disease. In this paper, a two-stage system comprising segmentation and classification is proposed. In the segmentation stage, infected regions were segmented from the labeled data. In the classifier stage, COVID-19/Pneumonia/Normal classification was performed using three different deep learning models, VGG16, ResNet50, and InceptionV3. To the best of our knowledge, this is the first attempt to sequentially design segmentation and classification systems for a more precise diagnosis. As a result of the study, 95% segmentation accuracy was obtained. The classifier models achieved 99%, 90%, and 98% accuracy, respectively. © 2022 IEEE.
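Editor's note: a schematic sketch of a two-stage segment-then-classify pipeline like the one described above; the `segment` and `classifiers` callables are hypothetical placeholders, and the majority vote shown is an assumption (the cited paper reports each classifier's accuracy separately).

```python
# Sketch: stage 1 segments the infected region, stage 2 classifies the masked image.
import numpy as np

def two_stage_diagnosis(image, segment, classifiers,
                        labels=("COVID-19", "Pneumonia", "Normal")):
    mask = segment(image)                                # stage 1: infected-region mask
    region = image * mask                                # keep only the segmented region
    votes = [int(clf(region)) for clf in classifiers]    # stage 2: class index per model
    return labels[np.bincount(votes).argmax()]           # majority vote (illustrative)
```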
